PINN with Data

Solid Mechanics Example


By Prof. Seungchul Lee
http://iai.postech.ac.kr/
Industrial AI Lab at POSTECH

Table of Contents


1. Thin Plate (Solid Mechanics)¶

1.1. Problem Setup¶

  • We will solve the thin-plate equations to find the displacement and stress distributions of a thin plate
  • Based on Kirchhoff–Love plate theory, three hypotheses are used:
    • straight lines normal to the mid-surface remain straight after deformation
    • straight lines normal to the mid-surface remain normal to the mid-surface after deformation
    • the thickness of the plate does not change during deformation
  • A non-uniform stretching force is applied to a square elastic plate
  • Only one quarter of the plate is considered since the geometry and in-plane forces are symmetric (yellow domain)





  • Problem properties


$$ E = 50 \,\mathrm{MPa}, \quad \nu = 0.3, \quad \omega = 20 \,\mathrm{mm}, \quad h = 1 \,\mathrm{mm}, \quad f = 1 \,\mathrm{MPa} $$

  • Governing equations (Fƶppl–von KĆ”rmĆ”n equations) for the isotropic elastic plate:


$$ \begin{align*} &{E \over 1 - \nu^2}\left({\partial^2 u \over \partial x^2} + {1 - \nu \over 2}{\partial^2 u \over \partial y^2} + {1 + \nu \over 2}{\partial^2 v \over \partial x \partial y} \right) = 0\\\\ &{E \over 1 - \nu^2}\left({\partial^2 v \over \partial y^2} + {1 - \nu \over 2}{\partial^2 v \over \partial x^2} + {1 + \nu \over 2}{\partial^2 u \over \partial x \partial y} \right) = 0 \end{align*} $$
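As a sanity check (not part of the original notebook), sympy can verify that the first governing equation is exactly the $x$-equilibrium of the plane-stress Cauchy stresses, $\partial \sigma_{xx}/\partial x + \partial \sigma_{xy}/\partial y = 0$:

```python
import sympy as sp

x, y, E, nu = sp.symbols('x y E nu')
u = sp.Function('u')(x, y)
v = sp.Function('v')(x, y)

# Plane-stress Hooke's law (same expressions used for the stress BCs below)
sig_xx = (sp.diff(u, x) + nu * sp.diff(v, y)) * E / (1 - nu**2)
sig_xy = (sp.diff(v, x) + sp.diff(u, y)) * E / (2 * (1 + nu))

# x-equilibrium of the stress field
equil_x = sp.diff(sig_xx, x) + sp.diff(sig_xy, y)

# First governing equation as written above
gov_x = E / (1 - nu**2) * (sp.diff(u, x, 2)
                           + (1 - nu) / 2 * sp.diff(u, y, 2)
                           + (1 + nu) / 2 * sp.diff(v, x, y))

print(sp.simplify(equil_x - gov_x))  # 0: the PDE is stress equilibrium
```

The $y$-equation follows by symmetry (swap $u \leftrightarrow v$ and $x \leftrightarrow y$).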

  • Two Dirichlet boundary conditions at $x = 0$ and $y = 0$ (B.C. ①, B.C. ②):


$$ v(x,y) = 0 \qquad \text{at} \quad y = 0\\\\ u(x,y) = 0 \qquad \text{at} \quad x = 0 $$

  • Two free boundary conditions at $y = \omega / 2$ (B.C. ③):


$$ \sigma_{yy} = 0,\quad \sigma_{yx} = 0 $$

  • Free boundary condition and in-plane force boundary condition at $x = \omega / 2$ (B.C. ④):


$$ \sigma_{xx} = f \, h \, \cos\left({\pi y \over \omega}\right), \quad \sigma_{xy} = 0 $$
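Evaluating this traction along the loaded edge shows the non-uniform stretching profile: maximal at $y = 0$ and vanishing at the free corner $y = \omega/2$ (values taken from the problem properties above):

```python
import numpy as np

f, h, w = 1.0, 1.0, 20.0  # MPa, mm, mm (problem properties)

# Traction profile sigma_xx = f*h*cos(pi*y/w) on the edge x = w/2
y = np.linspace(0.0, w / 2, 5)
traction = f * h * np.cos(np.pi * y / w)

print(traction)  # decays monotonically from f*h = 1 at y = 0 to 0 at y = w/2
```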

  • Make a neural network and loss functions as below:
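Schematically, the total training loss is the sum of mean-squared residuals of the two governing equations and the six boundary terms (a schematic form; the exact weighting is set when compiling the model):

$$ \mathcal{L} = \underbrace{\lVert r_x \rVert^2 + \lVert r_y \rVert^2}_{\text{PDE residuals}} + \underbrace{\lVert v \rVert^2_{y=0} + \lVert u \rVert^2_{x=0}}_{\text{Dirichlet}} + \underbrace{\lVert \sigma_{yy} \rVert^2 + \lVert \sigma_{yx} \rVert^2}_{y=\omega/2} + \underbrace{\lVert \sigma_{xx} - f h \cos(\pi y / \omega) \rVert^2 + \lVert \sigma_{xy} \rVert^2}_{x=\omega/2} $$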



1.2. Numerical Solution¶

  • The numerical solution of this problem is illustrated in the figures below
    • displacements $u$, $v$ and stresses $\sigma_{xx}$, $\sigma_{yy}$ in the $x$ and $y$ directions, respectively
  • We will solve this problem using a PINN and then compare it with the numerical solution



1.3. PINN as PDE Solver¶

1.3.1. Define Parameters¶

In [1]:
import numpy as np
import matplotlib.pyplot as plt
import random
import deepxde as dde
import pandas as pd
import torch
from sklearn.metrics import r2_score
Using backend: pytorch

In [2]:
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
In [3]:
# Properties
E = 50
nu = 0.3
a = 10
b = 10
f = 1
h = 1

1.3.2. Define PDE with Boundary Conditions¶

In [4]:
def boundary_x(X, on_boundary):
    return on_boundary and np.isclose(X[0], a)

def boundary_y(X, on_boundary):
    return on_boundary and np.isclose(X[1], b)

def boundary_y0(X, on_boundary):
    return on_boundary and np.isclose(X[1], 0)

def boundary_x0(X, on_boundary):
    return on_boundary and np.isclose(X[0], 0)
In [5]:
def pde(X, Y):
    du_xx = dde.grad.hessian(Y, X, i = 0, j = 0, component = 0)
    du_yy = dde.grad.hessian(Y, X, i = 1, j = 1, component = 0)
    du_xy = dde.grad.hessian(Y, X, i = 0, j = 1, component = 0)
    dv_xx = dde.grad.hessian(Y, X, i = 0, j = 0, component = 1)
    dv_yy = dde.grad.hessian(Y, X, i = 1, j = 1, component = 1)
    dv_xy = dde.grad.hessian(Y, X, i = 0, j = 1, component = 1)
    
    force_eq_x = (du_xx + 0.5*(1 - nu)*du_yy + 0.5*(1 + nu)*dv_xy)*E/(1 - nu**2)
    force_eq_y = (dv_yy + 0.5*(1 - nu)*dv_xx + 0.5*(1 + nu)*du_xy)*E/(1 - nu**2)
    
    return [force_eq_x, force_eq_y]
In [6]:
def BC_xy(X, Y):
    du_x = dde.grad.jacobian(Y, X, i = 0, j = 0)
    du_y = dde.grad.jacobian(Y, X, i = 0, j = 1)
    dv_x = dde.grad.jacobian(Y, X, i = 1, j = 0)
    dv_y = dde.grad.jacobian(Y, X, i = 1, j = 1)
    
    sig_xx = (du_x + nu*dv_y)*E/(1 - nu**2)
    sig_yy = (dv_y + nu*du_x)*E/(1 - nu**2)
    sig_xy = (dv_x + du_y)*E/(1 + nu)/2
    
    sig_ex = f * h * torch.cos(np.pi/(2*b)*X[:,1]).reshape(-1,1)
    
    return sig_xx-sig_ex, sig_xy, sig_yy

1.3.3. Define Geometry and Implement Boundary Condition¶

In [7]:
geom = dde.geometry.Rectangle(xmin=[0, 0], xmax=[a, b])

bc_Y0 = dde.DirichletBC(geom, lambda x:0, boundary_y0, component = 1)
bc_X0 = dde.DirichletBC(geom, lambda x:0, boundary_x0, component = 0)

bc_free_yy = dde.OperatorBC(geom, lambda x, y, _ : BC_xy(x,y)[2], boundary_y)
bc_free_yx = dde.OperatorBC(geom, lambda x, y, _ : BC_xy(x,y)[1], boundary_y)

bc_traction = dde.OperatorBC(geom, lambda x, y, _ : BC_xy(x,y)[0], boundary_x)
bc_free_xy = dde.OperatorBC(geom, lambda x, y, _ : BC_xy(x,y)[1], boundary_x)
In [8]:
data = dde.data.PDE(geom, 
                    pde, 
                    [bc_traction, bc_free_xy, bc_free_yx, bc_free_yy, bc_Y0, bc_X0], 
                    num_domain = 10000,
                    num_boundary = 1000,
                    num_test = 100,
                    train_distribution = 'LHS')
In [9]:
plt.figure(figsize = (5,5))
plt.scatter(data.train_x_all[:,0], data.train_x_all[:,1], s = 0.1)
plt.xlabel('x (mm)')
plt.ylabel('y (mm)')
plt.show()

1.3.4. Define Network and Hyper-parameters¶

In [10]:
layer_size = [2] + [64] * 5 + [2]
activation = "tanh"
initializer = "Glorot uniform"

net = dde.maps.FNN(layer_size, activation, initializer)

model = dde.Model(data, net)
model.compile("adam", lr = 1e-3)
Compiling model...
'compile' took 0.000315 s

1.3.5. Train (Adam Optimizer)¶

In [11]:
losshistory, train_state = model.train(epochs = 10000)
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Training model...

Step      Train loss                                                                          Test loss                                                                           Test metric
0         [2.95e+00, 2.61e+00, 4.25e+00, 1.53e+00, 3.62e+00, 1.47e+01, 7.46e-03, 2.00e-02]    [3.09e+00, 2.67e+00, 4.25e+00, 1.53e+00, 3.62e+00, 1.47e+01, 7.46e-03, 2.00e-02]    []  
1000      [6.66e-04, 4.43e-04, 2.50e-04, 1.26e-04, 5.44e-05, 6.65e-05, 8.56e-05, 2.37e-04]    [5.24e-04, 3.32e-04, 2.50e-04, 1.26e-04, 5.44e-05, 6.65e-05, 8.56e-05, 2.37e-04]    []  
2000      [3.85e-04, 3.34e-04, 1.19e-04, 7.17e-05, 3.37e-05, 5.67e-05, 3.08e-05, 7.36e-05]    [2.76e-04, 2.55e-04, 1.19e-04, 7.17e-05, 3.37e-05, 5.67e-05, 3.08e-05, 7.36e-05]    []  
3000      [3.00e-04, 2.51e-04, 9.61e-05, 7.80e-05, 2.59e-05, 4.53e-05, 2.25e-05, 6.06e-05]    [2.14e-04, 1.82e-04, 9.61e-05, 7.80e-05, 2.59e-05, 4.53e-05, 2.25e-05, 6.06e-05]    []  
4000      [2.46e-04, 2.07e-04, 7.28e-05, 5.79e-05, 2.59e-05, 4.13e-05, 2.00e-05, 6.11e-05]    [1.65e-04, 1.46e-04, 7.28e-05, 5.79e-05, 2.59e-05, 4.13e-05, 2.00e-05, 6.11e-05]    []  
5000      [1.74e-04, 1.97e-04, 5.53e-05, 5.26e-05, 2.29e-05, 3.86e-05, 1.05e-05, 5.01e-05]    [1.07e-04, 1.41e-04, 5.53e-05, 5.26e-05, 2.29e-05, 3.86e-05, 1.05e-05, 5.01e-05]    []  
6000      [9.33e-05, 1.87e-04, 9.28e-05, 8.05e-05, 7.78e-06, 9.14e-05, 6.30e-06, 4.01e-05]    [4.98e-05, 1.51e-04, 9.28e-05, 8.05e-05, 7.78e-06, 9.14e-05, 6.30e-06, 4.01e-05]    []  
7000      [6.10e-04, 2.44e-04, 5.20e-04, 1.42e-04, 1.65e-04, 9.99e-04, 9.65e-06, 3.78e-05]    [5.85e-04, 1.95e-04, 5.20e-04, 1.42e-04, 1.65e-04, 9.99e-04, 9.65e-06, 3.78e-05]    []  
8000      [2.68e-05, 6.31e-05, 5.11e-06, 1.24e-05, 6.37e-06, 9.47e-06, 3.53e-06, 2.02e-05]    [1.48e-05, 3.99e-05, 5.11e-06, 1.24e-05, 6.37e-06, 9.47e-06, 3.53e-06, 2.02e-05]    []  
9000      [1.97e-05, 3.36e-05, 8.51e-06, 5.24e-06, 4.37e-06, 4.97e-06, 3.73e-06, 1.35e-05]    [1.13e-05, 1.92e-05, 8.51e-06, 5.24e-06, 4.37e-06, 4.97e-06, 3.73e-06, 1.35e-05]    []  
10000     [1.57e-05, 2.31e-05, 5.59e-06, 4.51e-06, 3.33e-06, 8.52e-06, 4.15e-06, 1.03e-05]    [1.01e-05, 1.16e-05, 5.59e-06, 4.51e-06, 3.33e-06, 8.52e-06, 4.15e-06, 1.03e-05]    []  

Best model at step 10000:
  train loss: 7.53e-05
  test loss: 5.81e-05
  test metric: []

'train' took 405.507714 s

1.3.6. Plot Displacement (Adam Optimizer)¶

In [12]:
color_legend = [[0, 0.182], [-0.06, 0.011], [-0.0022,1.0], [-0.15, 0.45]]
title = ['x-displacement ($u$)', 'y-displacement ($v$)', r'$\sigma_{xx}$', r'$\sigma_{yy}$']
In [13]:
samples = geom.uniform_points(10201)
pde_disp = model.predict(samples)

plt.figure(figsize = (10, 4))
for idx in range(2):
    plt.subplot(1,2,idx+1)
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = pde_disp[:, idx],
                cmap = 'rainbow',
                s = 5)
    plt.clim(color_legend[idx])
    plt.title(title[idx])
    plt.xlim((0, a))
    plt.ylim((0, b))
    plt.colorbar()
plt.tight_layout()
plt.show()

1.3.7. Train More (L-BFGS Optimizer)¶

In [14]:
dde.optimizers.config.set_LBFGS_options(maxiter = 5000)
model.compile("L-BFGS")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Compiling model...
'compile' took 0.000308 s

Training model...

Step      Train loss                                                                          Test loss                                                                           Test metric
10000     [1.57e-05, 2.31e-05, 5.59e-06, 4.51e-06, 3.33e-06, 8.52e-06, 4.15e-06, 1.03e-05]    [1.01e-05, 1.16e-05, 5.59e-06, 4.51e-06, 3.33e-06, 8.52e-06, 4.15e-06, 1.03e-05]    []  
11000     [3.21e-06, 1.68e-06, 1.90e-07, 2.07e-07, 2.24e-07, 3.11e-07, 7.95e-07, 3.50e-07]    [2.08e-06, 1.24e-06, 1.90e-07, 2.07e-07, 2.24e-07, 3.11e-07, 7.95e-07, 3.50e-07]    []  
12000     [1.64e-06, 8.77e-07, 1.57e-07, 9.55e-08, 6.04e-08, 3.12e-07, 1.64e-07, 1.30e-07]    [1.08e-06, 5.83e-07, 1.57e-07, 9.54e-08, 6.04e-08, 3.12e-07, 1.64e-07, 1.30e-07]    []  
13000     [1.01e-06, 6.84e-07, 1.03e-07, 5.78e-08, 6.37e-08, 2.87e-07, 1.15e-07, 7.06e-08]    [6.76e-07, 4.63e-07, 1.03e-07, 5.78e-08, 6.37e-08, 2.87e-07, 1.15e-07, 7.06e-08]    []  
14000     [8.27e-07, 5.16e-07, 8.10e-08, 7.06e-08, 7.72e-08, 2.40e-07, 6.48e-08, 6.03e-08]    [6.05e-07, 3.78e-07, 8.10e-08, 7.06e-08, 7.72e-08, 2.40e-07, 6.48e-08, 6.03e-08]    []  
15000     [8.23e-07, 5.15e-07, 8.04e-08, 7.09e-08, 7.71e-08, 2.42e-07, 6.44e-08, 5.95e-08]    [6.02e-07, 3.76e-07, 8.04e-08, 7.09e-08, 7.71e-08, 2.42e-07, 6.44e-08, 5.95e-08]    []  

Best model at step 15000:
  train loss: 1.93e-06
  test loss: 1.57e-06
  test metric: []

'train' took 300.510463 s

1.3.8. Plot Results (Adam + L-BFGS)¶

In [15]:
pde_disp = model.predict(samples)

plt.figure(figsize = (10, 4))
for idx in range(2):
    plt.subplot(1,2,idx+1)
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = pde_disp[:, idx],
                cmap = 'rainbow',
                s = 5)
    plt.clim(color_legend[idx])
    plt.title(title[idx], fontsize = 15)
    plt.xlabel('x (mm)', fontsize = 14)
    plt.ylabel('y (mm)', fontsize = 14)
    plt.axis('square')
    plt.xlim((0, a))
    plt.ylim((0, b))
    plt.colorbar()
plt.tight_layout()
plt.savefig('PDE_disp.png', dpi=300, transparent = True)
plt.show()

1.3.9. Plot Stress¶

  • Since the neural network outputs only the $x$- and $y$-displacements, we use automatic differentiation (autograd) to compute the stresses
    • Assuming the Cauchy stress tensor components are linearly related to the von KĆ”rmĆ”n strains by Hooke's law (plane stress), the stresses can be written as:


$$ \sigma_{xx} = {E \over 1 - \nu^2}\left({\partial u \over \partial x} + \nu {\partial v \over \partial y} \right)\\\\ \sigma_{yy} = {E \over 1 - \nu^2}\left({\partial v \over \partial y} + \nu {\partial u \over \partial x} \right) $$
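A quick numeric check of these formulas with the problem properties ($E = 50$, $\nu = 0.3$): for an assumed uniform uniaxial stretch with $\partial u/\partial x = fh/E$ and $\partial v/\partial y = -\nu fh/E$ (an analytic example, not from the notebook), $\sigma_{xx}$ recovers $fh$ and $\sigma_{yy}$ vanishes:

```python
import numpy as np

E, nu, f, h = 50.0, 0.3, 1.0, 1.0

du_x = f * h / E        # du/dx for uniform uniaxial stretch
dv_y = -nu * f * h / E  # dv/dy (lateral contraction)

# Plane-stress Hooke's law, as in the equations above
sig_xx = (du_x + nu * dv_y) * E / (1 - nu**2)
sig_yy = (dv_y + nu * du_x) * E / (1 - nu**2)

print(sig_xx, sig_yy)  # approximately 1.0 (= f*h) and 0.0
```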

In [16]:
def check_stress(net, X):
    X = torch.tensor(X, requires_grad = True)
    disp = net(X)
    u_pred = disp[:,0].reshape(-1,1)
    v_pred = disp[:,1].reshape(-1,1)
    
    du_xy = torch.autograd.grad(u_pred, X, torch.ones_like(u_pred),
                                retain_graph=True, create_graph=True, allow_unused=True)[0].detach().cpu().numpy()
    dv_xy = torch.autograd.grad(v_pred, X, torch.ones_like(v_pred),
                                retain_graph=True, create_graph=True, allow_unused=True)[0].detach().cpu().numpy()
    
    du_x, du_y = du_xy[:,0], du_xy[:,1]
    dv_x, dv_y = dv_xy[:,0], dv_xy[:,1]
    
    sig_xx = ((du_x + nu*dv_y)*E/(1 - nu**2)).reshape(-1,1)
    sig_yy = ((dv_y + nu*du_x)*E/(1 - nu**2)).reshape(-1,1)
    
    return sig_xx, sig_yy
In [17]:
sig_x, sig_y = check_stress(model.net, samples)
pde_sig = np.hstack([sig_x, sig_y])

plt.figure(figsize = (10, 4))
for idx in range(2):
    plt.subplot(1,2,idx+1)
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = pde_sig[:, idx],
                cmap = 'rainbow',
                s = 5)
    plt.clim(color_legend[idx+2])
    plt.title(title[idx+2], fontsize = 15)
    plt.xlabel('x (mm)', fontsize = 14)
    plt.ylabel('y (mm)', fontsize = 14)
    plt.axis('square')
    plt.xlim((0, a))
    plt.ylim((0, b))
    plt.colorbar()
plt.tight_layout()
plt.savefig('PDE_stress.png', dpi=300, transparent = True)
plt.show()
In [20]:
# loc and u are the FEM node locations and x-displacements loaded from
# Plate_data.csv (Section 2.1); pick the nodes on the anti-diagonal x + y = 10
diagonal_index = []
for i in range(u.shape[0]):
    if loc[i,0] + loc[i,1] == 10:
        diagonal_index.append(i)
diagonal_index = np.array(diagonal_index)
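The loop above can also be written without an explicit Python loop; a sketch on a synthetic 11Ɨ11 grid standing in for the FEM node locations (`np.isclose` is safer than `==` for float coordinates):

```python
import numpy as np

# Synthetic grid standing in for the FEM node locations `loc`
xv, yv = np.meshgrid(np.linspace(0, 10, 11), np.linspace(0, 10, 11))
pts = np.column_stack([xv.ravel(), yv.ravel()])

# Indices of points on the anti-diagonal x + y = 10
diag_idx = np.where(np.isclose(pts[:, 0] + pts[:, 1], 10))[0]

print(len(diag_idx))  # 11 such points on an 11x11 grid
```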
In [21]:
diag_result_true = [u[diagonal_index], v[diagonal_index], stress[diagonal_index, 0], stress[diagonal_index, 1]]
diag_result_pde = [pde_disp[diagonal_index, 0], pde_disp[diagonal_index, 1], pde_sig[diagonal_index, 0], pde_sig[diagonal_index, 1]]

plt.figure(figsize = (9, 8))
for idx in range(4):
    plt.subplot(2,2,idx+1)
    plt.plot(loc[diagonal_index,0], diag_result_true[idx], label = 'FEM')
    plt.plot(loc[diagonal_index,0], diag_result_pde[idx], label = 'PDE')
    plt.xlabel('x (mm)', fontsize = 14)
    plt.ylabel(title[idx], fontsize = 14)
    plt.xlim((0, a))
    plt.legend(fontsize = 12)
plt.tight_layout()
plt.savefig('PDE_diag.png', dpi=300, transparent = True)
plt.show()

2. Data-driven Approach with Big Data¶

2.1. Load Data¶

In [23]:
Plate_data = pd.read_csv('./data/Plate_data.csv')
loc = Plate_data.iloc[:, 0:2].to_numpy()
u = Plate_data.iloc[:, 2:3].to_numpy()
v = Plate_data.iloc[:, 3:4].to_numpy()
stress = Plate_data.iloc[:, 4:7].to_numpy()
In [24]:
random.seed(1)
tr_sample = random.sample(range(0,Plate_data.shape[0]),5000)
x_sample = loc[tr_sample,:]
u_sample = u[tr_sample,:]
v_sample = v[tr_sample,:]
In [25]:
observe_u = dde.icbc.PointSetBC(x_sample, u_sample, component=0)
observe_v = dde.icbc.PointSetBC(x_sample, v_sample, component=1)

2.2. Define Geometry¶

In [26]:
geom = dde.geometry.Rectangle(xmin=[0, 0], xmax=[a, b])
data = dde.data.PDE(geom, 
                    None, 
                    [observe_u, observe_v], 
                    num_domain = 0, 
                    num_boundary = 0, 
                    num_test = 100)
In [27]:
plt.figure(figsize = (5,5))
plt.scatter(data.train_x_bc[:,0], data.train_x_bc[:,1], s = 1)
plt.xlabel('x (mm)')
plt.ylabel('y (mm)')
plt.show()

2.3. Define Network and Hyper-parameters¶

In [28]:
layer_size = [2] + [64] * 5 + [2]
activation = "tanh"
initializer = "Glorot uniform"

net = dde.maps.FNN(layer_size, activation, initializer)

model = dde.Model(data, net)
model.compile("adam", lr = 1e-3)
Compiling model...
'compile' took 0.000376 s

2.4. Train (Adam Optimizer)¶

In [29]:
losshistory, train_state = model.train(epochs = 10000)
dde.saveplot(losshistory, train_state, issave = False, isplot = False)
Training model...

Step      Train loss              Test loss               Test metric
0         [7.96e-02, 2.96e-02]    [7.96e-02, 2.96e-02]    []  
1000      [2.99e-06, 5.23e-06]    [2.99e-06, 5.23e-06]    []  
2000      [1.07e-04, 4.40e-05]    [1.07e-04, 4.40e-05]    []  
3000      [9.72e-07, 2.04e-06]    [9.72e-07, 2.04e-06]    []  
4000      [5.78e-07, 1.36e-06]    [5.78e-07, 1.36e-06]    []  
5000      [9.90e-07, 1.92e-06]    [9.90e-07, 1.92e-06]    []  
6000      [2.32e-07, 6.16e-07]    [2.32e-07, 6.16e-07]    []  
7000      [4.50e-06, 4.29e-06]    [4.50e-06, 4.29e-06]    []  
8000      [1.34e-07, 2.34e-07]    [1.34e-07, 2.34e-07]    []  
9000      [1.35e-07, 1.69e-07]    [1.35e-07, 1.69e-07]    []  
10000     [5.98e-08, 1.45e-07]    [5.98e-08, 1.45e-07]    []  

Best model at step 10000:
  train loss: 2.05e-07
  test loss: 2.05e-07
  test metric: []

'train' took 77.293595 s

2.5. Train More (L-BFGS Optimizer)¶

In [30]:
dde.optimizers.config.set_LBFGS_options(maxiter = 5000)
model.compile("L-BFGS")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Compiling model...
'compile' took 0.000232 s

Training model...

Step      Train loss              Test loss               Test metric
10000     [5.98e-08, 1.45e-07]    [5.98e-08, 1.45e-07]    []  
11000     [4.12e-08, 8.27e-08]    [4.12e-08, 8.27e-08]    []  
12000     [4.03e-08, 8.19e-08]    [4.03e-08, 8.19e-08]    []  
13000     [3.98e-08, 8.13e-08]    [3.98e-08, 8.13e-08]    []  
14000     [3.94e-08, 8.07e-08]    [3.94e-08, 8.07e-08]    []  
15000     [3.91e-08, 8.02e-08]    [3.91e-08, 8.02e-08]    []  

Best model at step 15000:
  train loss: 1.19e-07
  test loss: 1.19e-07
  test metric: []

'train' took 39.065465 s

2.6. Plot Results (Adam + L-BFGS)¶

In [31]:
big_disp = model.predict(samples)
sig_x, sig_y = np.array(check_stress(model.net, samples))
big_sig = np.hstack([sig_x, sig_y])
big_result = np.hstack([big_disp, big_sig])

plt.figure(figsize = (10, 8))
for idx in range(4):
    plt.subplot(2,2,idx+1)
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = big_result[:,idx],
                cmap = 'rainbow',
                s = 5)
    plt.title(title[idx], fontsize = 15)
    plt.xlabel('x (mm)', fontsize = 14)
    plt.ylabel('y (mm)', fontsize = 14)
    plt.axis('square')
    plt.xlim((0, a))
    plt.ylim((0, b))
    plt.colorbar()
plt.tight_layout()
plt.savefig('Bigdata_disp_stress.png', dpi=300, transparent = True)
plt.show()
In [33]:
diag_result_true = [u[diagonal_index], v[diagonal_index], stress[diagonal_index, 0], stress[diagonal_index, 1]]
diag_result_big = [big_disp[diagonal_index, 0], big_disp[diagonal_index, 1], big_sig[diagonal_index, 0], big_sig[diagonal_index, 1]]

plt.figure(figsize = (9, 8))
for idx in range(4):
    plt.subplot(2,2,idx+1)
    plt.plot(loc[diagonal_index,0], diag_result_true[idx], label = 'FEM')
    plt.plot(loc[diagonal_index,0], diag_result_big[idx], label = 'Big data')
    plt.xlabel('x (mm)', fontsize = 14)
    plt.ylabel(title[idx], fontsize = 14)
    plt.xlim((0, a))
    plt.legend(fontsize = 12)
plt.tight_layout()
plt.savefig('Big_diag.png', dpi=300, transparent = True)
plt.show()

3. Data-driven Approach with Small Data¶

3.1. Load Data¶

In [35]:
random.seed(1)
tr_sample = random.sample(range(0,Plate_data.shape[0]),100)
x_sample = loc[tr_sample,:]
u_sample = u[tr_sample,:]
v_sample = v[tr_sample,:]
In [36]:
observe_u = dde.icbc.PointSetBC(x_sample, u_sample, component=0)
observe_v = dde.icbc.PointSetBC(x_sample, v_sample, component=1)

3.2. Define Geometry¶

In [37]:
geom = dde.geometry.Rectangle(xmin=[0, 0], xmax=[a, b])
data = dde.data.PDE(geom, 
                    None, 
                    [observe_u, observe_v], 
                    num_domain = 0, 
                    num_boundary = 0, 
                    num_test = 100)
In [38]:
plt.figure(figsize = (5,5))
plt.scatter(data.train_x_bc[:,0], data.train_x_bc[:,1], s = 1)
plt.xlabel('x (mm)')
plt.ylabel('y (mm)')
plt.show()

3.3. Define Network and Hyper-parameters¶

In [39]:
layer_size = [2] + [64] * 5 + [2]
activation = "tanh"
initializer = "Glorot uniform"

net = dde.maps.FNN(layer_size, activation, initializer)

model = dde.Model(data, net)
model.compile("adam", lr = 1e-3)
Compiling model...
'compile' took 0.000424 s

3.4. Train (Adam Optimizer)¶

In [40]:
losshistory, train_state = model.train(epochs = 10000)
dde.saveplot(losshistory, train_state, issave = False, isplot = False)
Training model...

Step      Train loss              Test loss               Test metric
0         [1.43e-01, 1.36e-01]    [1.43e-01, 1.36e-01]    []  
1000      [3.46e-06, 5.28e-06]    [3.46e-06, 5.28e-06]    []  
2000      [1.43e-06, 3.01e-06]    [1.43e-06, 3.01e-06]    []  
3000      [7.64e-07, 1.90e-06]    [7.64e-07, 1.90e-06]    []  
4000      [3.88e-06, 2.58e-06]    [3.88e-06, 2.58e-06]    []  
5000      [4.62e-07, 1.25e-06]    [4.62e-07, 1.25e-06]    []  
6000      [5.43e-07, 1.29e-06]    [5.43e-07, 1.29e-06]    []  
7000      [2.98e-07, 7.72e-07]    [2.98e-07, 7.72e-07]    []  
8000      [6.21e-05, 8.34e-05]    [6.21e-05, 8.34e-05]    []  
9000      [1.73e-07, 4.72e-07]    [1.73e-07, 4.72e-07]    []  
10000     [8.61e-08, 2.72e-07]    [8.61e-08, 2.72e-07]    []  

Best model at step 10000:
  train loss: 3.59e-07
  test loss: 3.59e-07
  test metric: []

'train' took 74.336752 s

3.5. Train More (L-BFGS Optimizer)¶

In [41]:
dde.optimizers.config.set_LBFGS_options(maxiter = 5000)
model.compile("L-BFGS")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Compiling model...
'compile' took 0.000213 s

Training model...

Step      Train loss              Test loss               Test metric
10000     [8.61e-08, 2.72e-07]    [8.61e-08, 2.72e-07]    []  
11000     [8.29e-08, 2.62e-07]    [8.29e-08, 2.62e-07]    []  
12000     [8.23e-08, 2.61e-07]    [8.23e-08, 2.61e-07]    []  
13000     [8.18e-08, 2.60e-07]    [8.18e-08, 2.60e-07]    []  
14000     [8.13e-08, 2.58e-07]    [8.13e-08, 2.58e-07]    []  
15000     [8.08e-08, 2.57e-07]    [8.08e-08, 2.57e-07]    []  

Best model at step 15000:
  train loss: 3.38e-07
  test loss: 3.38e-07
  test metric: []

'train' took 37.468766 s

3.6. Plot Results (Adam + L-BFGS)¶

In [42]:
small_disp = model.predict(samples)
sig_x, sig_y = np.array(check_stress(model.net, samples))
small_sig = np.hstack([sig_x, sig_y])
small_result = np.hstack([small_disp, small_sig])

plt.figure(figsize = (10, 8))
for idx in range(4):
    plt.subplot(2,2,idx+1)
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = small_result[:,idx],
                cmap = 'rainbow',
                s = 5)
    plt.title(title[idx], fontsize = 15)
    plt.xlabel('x (mm)', fontsize = 14)
    plt.ylabel('y (mm)', fontsize = 14)
    plt.axis('square')
    plt.xlim((0, a))
    plt.ylim((0, b))
    plt.colorbar()
plt.tight_layout()
plt.savefig('Smalldata_disp_stress.png', dpi=300, transparent = True)
plt.show()
In [44]:
diag_result_true = [u[diagonal_index], v[diagonal_index], stress[diagonal_index, 0], stress[diagonal_index, 1]]
diag_result_small = [small_disp[diagonal_index, 0], small_disp[diagonal_index, 1], small_sig[diagonal_index, 0], small_sig[diagonal_index, 1]]

plt.figure(figsize = (9, 8))
for idx in range(4):
    plt.subplot(2,2,idx+1)
    plt.plot(loc[diagonal_index,0], diag_result_true[idx], label = 'FEM')
    plt.plot(loc[diagonal_index,0], diag_result_small[idx], label = 'Small data')
    plt.xlabel('x (mm)', fontsize = 14)
    plt.ylabel(title[idx], fontsize = 14)
    plt.xlim((0, a))
    plt.legend(fontsize = 12)
plt.tight_layout()
plt.savefig('Small_diag.png', dpi=300, transparent = True)
plt.show()

4. PINN with Small Data¶

In [46]:
data = dde.data.PDE(geom, 
                    pde, 
                    [bc_traction, bc_free_xy, bc_free_yx, bc_free_yy, bc_Y0, bc_X0, observe_u, observe_v], 
                    num_domain = 100, 
                    num_boundary = 100,
                    num_test = 100,
                    train_distribution = 'LHS')
In [47]:
plt.figure(figsize = (5,5))
plt.scatter(data.train_x_all[:,0], data.train_x_all[:,1], s = 1)
plt.xlabel('x (mm)')
plt.ylabel('y (mm)')
plt.show()

4.1. Define Network and Hyper-parameters¶

In [48]:
layer_size = [2] + [64] * 5 + [2]
activation = "tanh"
initializer = "Glorot uniform"

net = dde.maps.FNN(layer_size, activation, initializer)

model = dde.Model(data, net)
model.compile("adam", lr = 1e-3, loss_weights=[1, 1, 1, 1, 1, 1, 1, 1, 9, 9])
Compiling model...
'compile' took 0.000208 s

4.2. Train (Adam Optimizer)¶

In [49]:
losshistory, train_state = model.train(epochs = 10000)
dde.saveplot(losshistory, train_state, issave = False, isplot = False)
Training model...

Step      Train loss                                                                                              Test loss                                                                                               Test metric
0         [6.32e-01, 8.81e-01, 2.14e-01, 4.43e-01, 5.35e-01, 4.02e+00, 8.70e-02, 5.98e-02, 1.03e-01, 8.29e-01]    [4.09e-01, 4.76e-01, 2.14e-01, 4.43e-01, 5.35e-01, 4.02e+00, 8.70e-02, 5.98e-02, 1.03e-01, 8.29e-01]    []  
1000      [3.31e-04, 6.39e-04, 1.57e-04, 1.04e-04, 3.12e-05, 5.06e-05, 2.61e-05, 7.91e-06, 1.57e-04, 7.99e-05]    [3.06e-04, 7.43e-04, 1.57e-04, 1.04e-04, 3.12e-05, 5.06e-05, 2.61e-05, 7.91e-06, 1.57e-04, 7.99e-05]    []  
2000      [2.01e-04, 4.05e-04, 1.00e-04, 7.01e-05, 2.31e-05, 4.54e-05, 1.94e-05, 5.34e-06, 1.06e-04, 4.84e-05]    [1.55e-04, 4.26e-04, 1.00e-04, 7.01e-05, 2.31e-05, 4.54e-05, 1.94e-05, 5.34e-06, 1.06e-04, 4.84e-05]    []  
3000      [1.10e-04, 2.81e-04, 5.61e-05, 4.52e-05, 1.83e-05, 3.61e-05, 1.20e-05, 3.44e-06, 5.91e-05, 2.49e-05]    [8.99e-05, 2.92e-04, 5.61e-05, 4.52e-05, 1.83e-05, 3.61e-05, 1.20e-05, 3.44e-06, 5.91e-05, 2.49e-05]    []  
4000      [5.47e-05, 1.61e-04, 1.88e-05, 2.13e-05, 1.19e-05, 1.87e-05, 6.98e-06, 3.02e-06, 2.81e-05, 1.01e-05]    [6.28e-05, 1.88e-04, 1.88e-05, 2.13e-05, 1.19e-05, 1.87e-05, 6.98e-06, 3.02e-06, 2.81e-05, 1.01e-05]    []  
5000      [3.94e-05, 9.55e-05, 1.66e-05, 1.23e-05, 1.11e-05, 1.27e-05, 5.38e-06, 1.77e-06, 1.77e-05, 7.08e-06]    [5.32e-05, 1.12e-04, 1.66e-05, 1.23e-05, 1.11e-05, 1.27e-05, 5.38e-06, 1.77e-06, 1.77e-05, 7.08e-06]    []  
6000      [3.24e-05, 6.30e-05, 2.87e-05, 1.13e-05, 7.30e-06, 3.06e-05, 4.29e-06, 1.00e-06, 1.47e-05, 6.95e-06]    [4.46e-05, 6.89e-05, 2.87e-05, 1.13e-05, 7.30e-06, 3.06e-05, 4.29e-06, 1.00e-06, 1.47e-05, 6.95e-06]    []  
7000      [3.14e-05, 5.12e-05, 3.69e-06, 6.70e-06, 6.04e-06, 6.21e-06, 3.95e-06, 7.66e-07, 1.04e-05, 5.14e-06]    [4.16e-05, 6.08e-05, 3.69e-06, 6.70e-06, 6.04e-06, 6.21e-06, 3.95e-06, 7.66e-07, 1.04e-05, 5.14e-06]    []  
8000      [2.86e-04, 1.04e-04, 1.01e-04, 1.18e-04, 8.75e-06, 8.44e-05, 1.30e-05, 6.66e-05, 1.33e-03, 2.03e-04]    [2.85e-04, 9.55e-05, 1.01e-04, 1.18e-04, 8.75e-06, 8.44e-05, 1.30e-05, 6.66e-05, 1.33e-03, 2.03e-04]    []  
9000      [2.28e-05, 2.61e-05, 8.33e-06, 5.17e-06, 4.34e-06, 3.24e-06, 1.86e-06, 6.62e-07, 7.61e-06, 4.89e-06]    [2.84e-05, 3.22e-05, 8.33e-06, 5.17e-06, 4.34e-06, 3.24e-06, 1.86e-06, 6.62e-07, 7.61e-06, 4.89e-06]    []  
10000     [5.85e-05, 5.26e-05, 3.20e-04, 4.58e-05, 3.23e-05, 6.39e-05, 3.36e-06, 4.60e-06, 1.14e-04, 8.97e-05]    [6.35e-05, 5.75e-05, 3.20e-04, 4.58e-05, 3.23e-05, 6.39e-05, 3.36e-06, 4.60e-06, 1.14e-04, 8.97e-05]    []  

Best model at step 9000:
  train loss: 8.50e-05
  test loss: 9.67e-05
  test metric: []

'train' took 405.641725 s

4.3. Train More (L-BFGS Optimizer)¶

In [50]:
dde.optimizers.config.set_LBFGS_options(maxiter = 5000)
model.compile("L-BFGS")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Compiling model...
'compile' took 0.000217 s

Training model...

Step      Train loss                                                                                              Test loss                                                                                               Test metric
10000     [5.85e-05, 5.26e-05, 3.20e-04, 4.58e-05, 3.23e-05, 6.39e-05, 3.36e-06, 4.60e-06, 1.27e-05, 9.97e-06]    [6.35e-05, 5.75e-05, 3.20e-04, 4.58e-05, 3.23e-05, 6.39e-05, 3.36e-06, 4.60e-06, 1.27e-05, 9.97e-06]    []  
11000     [3.70e-06, 2.74e-06, 3.66e-07, 1.98e-07, 2.60e-07, 5.79e-07, 9.73e-08, 8.92e-08, 1.61e-07, 2.98e-07]    [5.65e-06, 4.43e-06, 3.66e-07, 1.98e-07, 2.60e-07, 5.79e-07, 9.73e-08, 8.92e-08, 1.61e-07, 2.98e-07]    []  
12000     [2.14e-06, 1.24e-06, 1.17e-07, 1.09e-07, 7.08e-08, 2.03e-07, 2.18e-09, 5.68e-08, 8.25e-08, 1.58e-07]    [4.23e-06, 1.80e-06, 1.17e-07, 1.09e-07, 7.08e-08, 2.03e-07, 2.18e-09, 5.68e-08, 8.25e-08, 1.58e-07]    []  
13000     [1.32e-06, 7.85e-07, 5.86e-08, 6.08e-08, 6.31e-08, 3.23e-07, 1.03e-08, 5.43e-09, 2.34e-08, 5.29e-08]    [2.81e-06, 1.34e-06, 5.86e-08, 6.08e-08, 6.31e-08, 3.23e-07, 1.03e-08, 5.43e-09, 2.34e-08, 5.29e-08]    []  
14000     [1.03e-06, 5.13e-07, 6.46e-08, 4.06e-08, 5.21e-08, 1.92e-07, 3.05e-09, 1.11e-08, 1.44e-08, 1.83e-08]    [1.94e-06, 9.13e-07, 6.46e-08, 4.06e-08, 5.21e-08, 1.92e-07, 3.05e-09, 1.11e-08, 1.44e-08, 1.83e-08]    []  
15000     [1.02e-06, 5.12e-07, 6.70e-08, 4.07e-08, 5.18e-08, 1.92e-07, 3.30e-09, 1.12e-08, 1.48e-08, 1.85e-08]    [1.92e-06, 9.14e-07, 6.70e-08, 4.07e-08, 5.18e-08, 1.92e-07, 3.30e-09, 1.12e-08, 1.48e-08, 1.85e-08]    []  

Best model at step 15000:
  train loss: 1.93e-06
  test loss: 3.24e-06
  test metric: []

'train' took 304.442521 s

4.4. Plot Results (Adam + L-BFGS)¶

In [51]:
pde_small_disp = model.predict(samples)
sig_x, sig_y = np.array(check_stress(model.net, samples))
pde_small_sig = np.hstack([sig_x, sig_y])
pde_small_result = np.hstack([pde_small_disp, pde_small_sig])

plt.figure(figsize = (10, 8))
for idx in range(4):
    plt.subplot(2,2,idx+1)
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = pde_small_result[:,idx],
                cmap = 'rainbow',
                s = 5)
    plt.title(title[idx], fontsize = 15)
    plt.xlabel('x (mm)', fontsize = 14)
    plt.ylabel('y (mm)', fontsize = 14)
    plt.axis('square')
    plt.xlim((0, a))
    plt.ylim((0, b))
    plt.colorbar()
plt.tight_layout()
plt.savefig('PDEdata_disp_stress.png', dpi=300, transparent = True)
plt.show()
In [53]:
diag_result_true = [u[diagonal_index], v[diagonal_index], stress[diagonal_index, 0], stress[diagonal_index, 1]]
diag_result_pde_small = [pde_small_disp[diagonal_index, 0], pde_small_disp[diagonal_index, 1], pde_small_sig[diagonal_index, 0], pde_small_sig[diagonal_index, 1]]

plt.figure(figsize = (9, 8))
for idx in range(4):
    plt.subplot(2,2,idx+1)
    plt.plot(loc[diagonal_index,0], diag_result_true[idx], label = 'FEM')
    plt.plot(loc[diagonal_index,0], diag_result_pde_small[idx], label = 'PDE+small data')
    plt.xlabel('x (mm)', fontsize = 14)
    plt.ylabel(title[idx], fontsize = 14)
    plt.xlim((0, a))
    plt.legend(fontsize = 12)
plt.tight_layout()
plt.savefig('PDE+small_diag.png', dpi=300, transparent = True)
plt.show()

4.5. Plot with All Results¶

In [64]:
plt.figure(figsize = (9, 8))
for idx in range(4):
    plt.subplot(2,2,idx+1)
    plt.plot(loc[diagonal_index,0], diag_result_true[idx],linewidth = 2, c = 'k' ,label = 'FEM')
    plt.plot(loc[diagonal_index,0], diag_result_pde[idx],  '--' ,linewidth = 2 ,label = 'PDE')
    plt.plot(loc[diagonal_index,0], diag_result_big[idx], '-.' ,linewidth = 2, label = 'Big data')
    plt.plot(loc[diagonal_index,0], diag_result_small[idx], ':' ,linewidth = 2 , label = 'Small data')
    plt.plot(loc[diagonal_index,0], diag_result_pde_small[idx], '.',linewidth = 2 , label = 'PDE+small data')
    plt.xlabel('x (mm)', fontsize = 14)
    plt.ylabel(title[idx], fontsize = 14)
    plt.xlim((0, a))
    plt.legend(fontsize = 11)
plt.tight_layout()
plt.savefig('Total_diag.png', dpi=300, transparent = True)
plt.show()
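To go beyond visual comparison, `r2_score` (imported at the top but otherwise unused) can summarize the agreement between a prediction and the FEM reference with a single number; a sketch on synthetic arrays (the real comparison would use `diag_result_true` and the model predictions):

```python
import numpy as np
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
fem = np.sin(np.linspace(0, np.pi, 50))           # stand-in FEM reference
pred = fem + rng.normal(0, 0.01, size=fem.shape)  # stand-in model prediction

r2 = r2_score(fem, pred)  # 1.0 means perfect agreement
print(f"R^2 = {r2:.4f}")
```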
In [3]:
%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')